Search for: All records

Creators/Authors contains: "Solovey, Erin T"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Our team of culturally Deaf, ASL-signing and hearing, non-signing HCI researchers conducts research with the Deaf community to create ASL resources. This case study summarizes reflections, lessons learned, and challenges with HCI user study protocols, based on our experience conducting five user studies with deaf ASL-signing participants. It offers considerations for researchers in this space related to conducting think-aloud protocols, interviews, and surveys; obtaining informed consent; working with interpreter services; and analyzing and storing data. Our goal is to share the lessons we learned and to offer recommendations for future research in this area. Going beyond accommodations and accessibility, we hope these reflections contribute to a shift toward ASL-centric HCI research methodologies for working with the Deaf community.
    Free, publicly-accessible full text available April 25, 2026
  2. Video components are a central element of user interfaces that deliver content in a signed language (SL), but their potential extends beyond content accessibility. Sign language videos can be designed as user interface elements: layered with interactive features to create navigation cues, page headings, and menu options. To be effective for signing users, novel video-rich sign language interfaces require informed design choices across many parameters. To align with the specific needs and shared conventions of the Deaf community and other ASL-signers, we present a user study in which deaf ASL-signers interacted with an array of designs for sign language video elements. Their responses offer insights into how the Deaf community may prefer video elements to be designed, positioned, and implemented to guide user experiences. Through a qualitative analysis, we take initial steps toward understanding deaf ASL-signers' perceptions of a set of emerging design principles, paving the way for future SL-centric user interfaces containing customized video elements and layouts designed primarily around signed-language usage and requirements.
    Free, publicly-accessible full text available April 25, 2026
  3. Free, publicly-accessible full text available May 12, 2026
  4. In human-computer interaction (HCI), there has been a push toward open science, but this shift has not happened consistently for HCI research that uses brain signals, in part because guidelines for supporting reuse and reproduction remain unclear. To understand existing practices in the field, this paper examines 110 publications, exploring their domains, applications, modalities, mental states and processes, and more. The analysis reveals wide variance in how authors report experiments, which makes that research hard to understand, reproduce, and build on. The paper then describes an overarching experiment model that provides a formal structure for reporting HCI research with brain signals, including definitions, terminology, categories, and examples for each aspect. Factor analysis identified multiple distinct reporting styles, each tied to different types of research. The paper concludes with recommendations and a discussion of future challenges, turning the abstract model and empirical observations into actionable items for making HCI research with brain signals more reproducible and reusable. (A schematic sketch of such a reporting structure appears after this list.)
  5. Advances in technology and lower equipment costs are enabling convenient, non-invasive recording of brain data outside clinical settings, in real-world environments, and by non-experts. Despite the growing interest in and availability of brain-signal datasets, most analysis tools are built for experts in a specific device technology and impose rigid constraints on the types of analysis available. We developed BrainEx to support interactive exploration and discovery within brain-signal datasets. BrainEx takes advantage of algorithms that enable fast exploration of large, complex collections of time-series data while remaining easy to learn and use. The system lets researchers perform similarity search, explore feature data and natural clustering, and select sequences of interest for future searches and exploration, all within a usable visual tool. In addition to describing BrainEx's distributed architecture and visual design, this paper reports a benchmark experiment showing that BrainEx outperforms existing systems for similarity search, a preliminary user study in which domain experts used the visual exploration interface and confirmed that it meets their requirements, and a case study using BrainEx to explore real-world, domain-relevant data. (A brute-force illustration of subsequence similarity search appears after this list.)
  6. Conducting human-centered research by, with, and for the ASL-signing Deaf community requires rethinking current human-computer interaction processes in order to meet the community's linguistic and cultural needs and expectations. This paper highlights key considerations that emerged in our work creating an ASL-based questionnaire, along with our recommendations for handling them.
  7. Automatic detection of an individual's mind-wandering state has implications for designing and evaluating engaging, effective learning interfaces. While it is difficult to tell whether an individual is mind-wandering or focused on the task from externally observable behavior alone, brain-based sensing offers unique insight into internal states. To explore feasibility, we conducted a study using functional near-infrared spectroscopy (fNIRS) and investigated machine-learning classifiers that detect mind-wandering episodes from fNIRS data at both the individual and group level, focusing on automated window selection to improve classification results. For individual-level classification, a moving-window method combined with a linear discriminant classifier found the best windows for classification and achieved a mean F1-score of 74.8%. For group-level classification, we propose an individual-based time window selection (ITWS) algorithm that incorporates individual differences into window selection. The algorithm first finds the best window for each individual using embedded individual-level classifiers, then uses these windows from all participants to build the final classifier. We evaluated ITWS when used with eXtreme gradient boosting, convolutional neural networks, and deep neural networks. Our results show significant improvement over the previous state of the art in brain-based classification of mind-wandering, with an average F1-score of 73.2%. This builds a foundation for mind-wandering detection, both for evaluating multimodal learning interfaces and for future attention-aware systems. (A simplified sketch of the ITWS idea appears after this list.)
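As a companion to the experiment model summarized in item 4, here is a minimal sketch of how such a reporting structure might be captured in code. The field names follow the aspects the abstract enumerates (domain, application, modality, mental states and processes); the class itself, its remaining fields, and all example values are illustrative assumptions, not the authors' formal model.

    # Hypothetical reporting structure; field names mirror the abstract's
    # categories, everything else is an assumption for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class BrainSignalExperimentReport:
        domain: str                      # e.g., "education", "driving"
        application: str                 # e.g., "adaptive interface evaluation"
        modalities: list[str]            # e.g., ["fNIRS", "EEG"]
        mental_states: list[str]         # e.g., ["workload", "mind-wandering"]
        participants: int                # sample size
        analysis: str                    # e.g., "LDA, 3-fold cross-validation"
        data_available: bool = False     # supports reuse and reproduction
        notes: dict[str, str] = field(default_factory=dict)

    # Usage (values invented for illustration):
    # report = BrainSignalExperimentReport(
    #     domain="education", application="attention-aware tutoring",
    #     modalities=["fNIRS"], mental_states=["mind-wandering"],
    #     participants=20, analysis="LDA with moving windows")

A structure like this makes the reporting aspects explicit and machine-readable, which is the kind of consistency the paper argues is missing across the 110 surveyed publications.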
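To make the similarity-search idea in item 5 concrete, below is a deliberately naive, brute-force sketch of the kind of query BrainEx answers: finding the subsequences most similar to a query pattern across a collection of brain-signal time series. BrainEx itself relies on fast exploration algorithms and a distributed backend to make such queries scale; the z-normalization, Euclidean distance, and data layout here are assumed simplifications, not the system's actual implementation.

    # Brute-force subsequence similarity search; a baseline illustration only.
    import numpy as np

    def znorm(x, eps=1e-8):
        # Z-normalize so matches reflect shape rather than amplitude/offset.
        return (x - x.mean()) / (x.std() + eps)

    def best_matches(series_collection, query, k=5):
        # series_collection: dict mapping series id -> 1-D numpy array.
        q, m = znorm(np.asarray(query, dtype=float)), len(query)
        hits = []  # (distance, series id, start index)
        for sid, series in series_collection.items():
            for start in range(len(series) - m + 1):
                window = znorm(series[start:start + m])
                hits.append((float(np.linalg.norm(window - q)), sid, start))
        return sorted(hits)[:k]  # k nearest subsequences across the collection

    # Usage (hypothetical signals): top five matches to a short pattern.
    # matches = best_matches({"s01": sig1, "s02": sig2}, pattern, k=5)

The O(total length × query length) cost of this loop is exactly what motivates the preprocessing and distributed architecture the paper benchmarks.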
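Finally, a simplified sketch of the individual-based time window selection (ITWS) idea from item 7. The data shapes, mean-activation features, candidate window grid, and the use of scikit-learn's GradientBoostingClassifier in place of the paper's eXtreme gradient boosting are all assumptions for illustration; only the two-stage structure (per-individual window selection via embedded classifiers, then a pooled group model) follows the abstract.

    # ITWS-style sketch, assuming per-participant fNIRS trials shaped
    # (n_trials, n_timepoints, n_channels) with binary labels
    # (mind-wandering vs. on-task) and a shared channel layout.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    def window_features(trials, start, length):
        # Mean activation per channel within the window (a common fNIRS feature).
        return trials[:, start:start + length, :].mean(axis=1)

    def best_window(trials, labels, lengths=(20, 40), step=10):
        # Embedded individual-level classifier: score each candidate window
        # with cross-validated F1 and keep the best-scoring one.
        best = (0, lengths[0], -np.inf)
        for length in lengths:
            for start in range(0, trials.shape[1] - length + 1, step):
                X = window_features(trials, start, length)
                f1 = cross_val_score(LinearDiscriminantAnalysis(), X, labels,
                                     cv=3, scoring="f1").mean()
                if f1 > best[2]:
                    best = (start, length, f1)
        return best[:2]

    def fit_group_classifier(participants):
        # participants: list of (trials, labels) tuples, one per individual.
        X_all, y_all = [], []
        for trials, labels in participants:
            start, length = best_window(trials, labels)  # per-individual window
            X_all.append(window_features(trials, start, length))
            y_all.append(labels)
        # Pooled group-level model built from individually selected windows;
        # the paper also evaluates CNNs and DNNs in this role.
        return GradientBoostingClassifier().fit(np.vstack(X_all),
                                                np.concatenate(y_all))

Selecting the window per individual before pooling is the key design choice: it lets the group model absorb individual differences in when the discriminative brain response occurs, which the abstract credits for the improvement over prior work.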